Sprint2/prompt design v1 2.3 #8

Merged

Arhum2 merged 10 commits into main from sprint2/prompt-design-v1-2.3 on Feb 24, 2026

Conversation

Collaborator

@usha-sj usha-sj commented Feb 8, 2026

Feature (PromptV1): Centralized study generation prompt system SOC-2.3

🎉 New feature (extends backend architecture, non-breaking)


PR Summary

Adds a centralized, versioned prompt system for AI study generation.
The Gemini prompt is moved out of main.py into a dedicated module with strict JSON enforcement and few-shot examples to improve reliability without changing the API contract.


Overview

What feature/problem does this PR address?

  • Hardcoded prompt in main.py was hard to maintain
  • AI output inconsistencies could break JSON parsing
  • No structure for prompt versioning or iteration

What approach was taken?

  • Created prompts/study_gen_v1.py as the single source of truth
  • Added build_study_generation_prompt() to construct prompts
  • Enforced strict JSON schema matching the API contract
  • Added few-shot examples to stabilize Gemini output
  • Wired endpoint to use the prompt builder

Important design decisions / trade-offs

  • Prompt is versioned (v1) to allow safe future upgrades
  • JSON schema is frozen to preserve frontend compatibility
  • Few-shot examples increase tokens but improve consistency (can be set to False when called in main.py)
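
For reference, the builder's shape might look roughly like this. A minimal sketch: only `build_study_generation_prompt()` and the examples toggle are named in this PR; the constants and assembly details are assumptions.

```python
# Sketch of the study_gen_v1 module's public surface; constants and assembly
# details are assumptions, not the PR's exact code.
SYSTEM_PROMPT = "You are a study assistant. Respond with ONLY valid JSON."

OUTPUT_FORMAT = (
    '{"summary": ["..."], "quiz": [{"question": "...", '
    '"options": ["...", "...", "...", "..."], "answer": "..."}]}'
)

EXAMPLES = "{...}"  # few-shot JSON examples, elided here

def build_study_generation_prompt(user_notes: str, include_examples: bool = True) -> str:
    """Assemble the Gemini prompt from fixed parts plus the user's notes."""
    parts = [SYSTEM_PROMPT]
    if include_examples:  # extra tokens, but more consistent output
        parts.append(EXAMPLES)
    parts.append(f"NOTES:\n{user_notes}")
    parts.append(f"Return ONLY a JSON object matching this schema:\n{OUTPUT_FORMAT}")
    return "\n\n".join(parts)
```

Setting `include_examples=False` in the caller trades consistency for a shorter prompt, which matches the trade-off noted above.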

Files Changed

File | Action | Description
backend/prompts/study_gen_v1.py | Created | Centralized prompt system (v1)
backend/main.py | Modified | Uses prompt builder instead of inline prompt
backend/README.md | Modified | Documents prompt architecture

Test Cases / Edge Cases

  • Empty notes rejected by request validator
  • Invalid AI JSON returns safe 500 error
  • Markdown-wrapped AI output cleaned and parsed
  • Quiz quality issues logged (non-blocking)
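
The markdown-wrapping edge case above can be handled with a small helper; a sketch assuming a `clean_response`-style function (the PR's actual implementation may differ). A `json.JSONDecodeError` on the cleaned text would then map to the safe 500 error.

```python
import json

def clean_response(raw: str) -> str:
    """Strip markdown code fences (```json ... ```) before json.loads."""
    text = raw.strip()
    if text.startswith("```"):
        # drop the opening fence line (e.g. "```json")
        text = text.split("\n", 1)[1] if "\n" in text else ""
        if text.rstrip().endswith("```"):
            text = text.rstrip()[:-3]
    return text.strip()
```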

Checklist

  • Added a clear description
  • Documented edge cases
  • Updated backend documentation
  • Preserved API contract

Additional Notes

  • No frontend changes
  • No API schema changes
  • Future upgrades should create study_gen_v2.py instead of modifying v1
  • The old hardcoded prompt is still present but commented out as a fallback; it should probably be deleted in a follow-up

Jira Ticket

Jira Ticket(s) - [SOC-2.3]

@usha-sj usha-sj requested review from Arhum2 and alextgu February 8, 2026 23:48
@usha-sj usha-sj self-assigned this Feb 8, 2026
Contributor

greptile-apps bot commented Feb 8, 2026

Greptile Summary

This PR extracts hardcoded Gemini prompts from main.py into a dedicated versioned module (backend/prompts/study_gen_v1.py) with few-shot examples and quality validation functions. The refactoring improves maintainability and sets up a versioning strategy for future prompt iterations.

Key improvements:

  • Centralized prompt system with build_study_generation_prompt() function
  • Added few-shot JSON examples to improve AI output consistency
  • Quality validation for quiz questions (duplicate options, invalid answers)
  • Better error logging (logs response length instead of raw content to avoid leaking user data)
  • Updated type hints from list[str] to List[str] for Python 3.8 compatibility
  • Comprehensive documentation in README.md

Issues already flagged in previous comments:

  • Type hints in study_gen_v1.py:280 use Python 3.9+ syntax (list[dict] instead of List[Dict])
  • The EXAMPLES constant structure is actually valid JSON (an array of two objects), not malformed as some comments suggested

Confidence Score: 3/5

  • This PR is relatively safe but has type compatibility issues that may cause runtime failures on Python 3.8.
  • The refactoring is well-structured and improves code organization, but the Python 3.9+ type syntax in study_gen_v1.py:280 (list[dict]) could cause syntax errors on older Python versions. The main.py changes correctly use List/Optional from typing, but the new prompt module doesn't follow this pattern consistently. Most issues from previous review threads have been addressed or are understood.
  • Pay attention to backend/prompts/study_gen_v1.py for type hint compatibility issues that could prevent server startup on Python <3.9

Important Files Changed

Filename | Overview
backend/prompts/study_gen_v1.py | New centralized prompt system with JSON examples, but EXAMPLES array structure is valid JSON (array of objects). Type hints use Python 3.9+ syntax.
backend/main.py | Refactored to use centralized prompt builder, improved type hints to use List/Optional for broader compatibility, better error logging without raw response data.
backend/README.md | Added comprehensive documentation for the new prompt system with versioning strategy and usage examples.

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Client Request] --> B[FastAPI Endpoint<br/>/api/v1/generate]
    B --> C[build_study_generation_prompt]
    C --> D{include_examples?}
    D -->|Yes| E[Add SYSTEM_PROMPT +<br/>EXAMPLES + OUTPUT_FORMAT]
    D -->|No| F[Add SYSTEM_PROMPT +<br/>OUTPUT_FORMAT only]
    E --> G[Send to Gemini API]
    F --> G
    G --> H[Receive JSON Response]
    H --> I[clean_response<br/>Remove markdown blocks]
    I --> J[json.loads]
    J --> K[validate_data<br/>Check required fields]
    K --> L[validate_quiz_quality<br/>Check for duplicates/errors]
    L --> M[Return GenerateResponse]
    M --> N[Client receives study pack]
    
    J -.->|JSONDecodeError| O[500 Error]
    K -.->|ValueError| O
    G -.->|None response| O

Last reviewed commit: 61b7a24


@greptile-apps greptile-apps bot left a comment


3 files reviewed, 2 comments


Contributor

Arhum2 commented Feb 9, 2026

@greptileai review this again and give me the updated confidence score

Contributor

greptile-apps bot commented Feb 9, 2026

Greptile Overview

Greptile Summary

Refactors prompt logic from inline string in main.py into a versioned, centralized module (prompts/study_gen_v1.py) with few-shot examples and quality validation.

Key improvements:

  • Structured prompt system with system instructions, output format spec, and examples
  • build_study_generation_prompt() constructs prompts programmatically
  • validate_quiz_quality() detects common issues (duplicate options, invalid answers)
  • Comprehensive documentation in README.md with versioning strategy
  • Non-breaking API changes (preserves frontend compatibility)
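
The quality validator described above might be sketched as follows. The duplicate-option and invalid-answer checks come from this summary; everything else (signature, warning format) is an assumption.

```python
from typing import Dict, List

def validate_quiz_quality(quiz: List[Dict]) -> List[str]:
    """Return non-blocking warnings for common quiz defects (sketch)."""
    warnings = []
    for i, q in enumerate(quiz):
        options = q.get("options", [])
        if len(set(options)) != len(options):
            warnings.append(f"question {i}: duplicate options")
        if q.get("answer") not in options:
            warnings.append(f"question {i}: answer not among options")
    return warnings
```

Because the warnings are returned rather than raised, the caller can log them without failing the request.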

Issues found:

  • Python 3.9+ type syntax (list[str], list[QuizQuestion]) in main.py breaks compatibility with Python 3.8
  • Missing __init__.py in prompts/ directory
  • Commented-out code should be removed before merge

Confidence Score: 3/5

  • Safe to merge after fixing Python 3.8 compatibility issues in main.py
  • The refactoring is well-structured and non-breaking, but Python 3.9+ type hints in main.py will cause syntax errors on Python 3.8 deployments. The commented-out code and missing __init__.py are minor style issues
  • backend/main.py requires type hint fixes for Python 3.8 compatibility (lines 54, 64-65)

Important Files Changed

Filename | Overview
backend/prompts/study_gen_v1.py | New centralized prompt module with versioning, few-shot examples, and validation. Minor: missing __init__.py for proper package structure
backend/main.py | Integrates new prompt system. Issues: Python 3.9+ type syntax breaks 3.8 compatibility, commented code should be removed

Sequence Diagram

sequenceDiagram
    participant Client as Frontend
    participant API as /api/v1/generate
    participant Prompt as study_gen_v1.py
    participant Gemini as GeminiService
    participant Validator as validate_quiz_quality()

    Client->>API: POST /api/v1/generate<br/>{text: "user notes"}
    API->>API: Validate request.text<br/>(not empty)
    API->>Prompt: build_study_generation_prompt()<br/>(user_notes, include_examples=True)
    Prompt->>Prompt: Construct prompt with:<br/>SYSTEM_PROMPT + EXAMPLES +<br/>user_notes + OUTPUT_FORMAT
    Prompt-->>API: Complete prompt string
    API->>Gemini: call_gemini(prompt)
    Gemini-->>API: JSON response<br/>(may have markdown wrapping)
    API->>API: Clean markdown fences<br/>(remove ```json markers)
    API->>API: Parse JSON and validate<br/>(summary, quiz fields)
    API->>Validator: validate_quiz_quality(quiz_data)
    Validator-->>API: warnings list<br/>(logged, non-blocking)
    API-->>Client: GenerateResponse<br/>{summary: [], quiz: []}


@greptile-apps greptile-apps bot left a comment


2 files reviewed, 2 comments


Contributor

greptile-apps bot commented Feb 9, 2026

Additional Comments (2)

backend/main.py
list[str] and list[QuizQuestion] use Python 3.9+ syntax. For Python 3.8 compatibility, import List from typing and use List[str] and List[QuizQuestion]

    options: List[str]
    answer: str

backend/main.py
Same Python 3.9+ syntax issue

    summary: List[str]
    quiz: List[QuizQuestion]
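
For context, the 3.8-compatible form of these hints is shown here on plain dataclasses to keep the sketch self-contained; the real models in main.py are Pydantic models.

```python
# Illustration of the type-hint fix: List[...] from typing works on Python 3.8,
# while subscripting builtins (list[str]) requires 3.9+. Dataclasses stand in
# for the actual Pydantic models here.
from dataclasses import dataclass
from typing import List

@dataclass
class QuizQuestion:
    question: str
    options: List[str]  # List[str] instead of list[str] for 3.8 compatibility
    answer: str

@dataclass
class GenerateResponse:
    summary: List[str]
    quiz: List[QuizQuestion]
```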

Contributor

Arhum2 commented Feb 10, 2026

@greptileai review this again with the new version of main and give me the confidence score as well

Contributor

greptile-apps bot commented Feb 10, 2026

Greptile Overview

Greptile Summary

This PR centralizes the Gemini study-generation prompt into backend/prompts/study_gen_v1.py and wires /api/v1/generate to use build_study_generation_prompt() instead of an inline f-string prompt. It also adds a lightweight validate_quiz_quality() pass to emit warnings about quiz output structure.

Main issues to address before merge are around integration/robustness: the new import path in backend/main.py conflicts with the documented uvicorn main:app run mode (startup import failure), the few-shot example scaffolding includes non-JSON tokens that can prompt Gemini to emit non-JSON (breaking json.loads()), and the endpoint prints raw model output (which may contain user notes) directly to stdout.

Confidence Score: 2/5

  • Not safe to merge until startup/import and output-parsing risks are addressed.
  • The PR introduces a run-context import that will break the common uvicorn main:app invocation, and the prompt’s few-shot scaffolding can cause non-JSON model output that the endpoint can’t parse. Additionally, raw model output is printed to stdout, which can leak user-provided notes into logs.
  • backend/main.py, backend/prompts/study_gen_v1.py

Important Files Changed

Filename | Overview
backend/README.md | Documents the new centralized study prompt module and how it’s used by /api/v1/generate.
backend/main.py | Switches generate endpoint to use the prompt builder and adds quiz quality warnings; introduces an import/run-context mismatch and logs raw AI output to stdout.
backend/prompts/study_gen_v1.py | Adds centralized prompt builders and quiz-quality validation; few-shot examples include non-JSON scaffolding that can induce non-JSON model outputs.

Sequence Diagram

sequenceDiagram
  autonumber
  participant Client
  participant API as FastAPI (/api/v1/generate)
  participant Prompt as study_gen_v1.build_study_generation_prompt
  participant Gemini as GeminiService.call_gemini

  Client->>API: POST /api/v1/generate {text}
  API->>Prompt: build_study_generation_prompt(user_notes, include_examples)
  Prompt-->>API: prompt string
  API->>Gemini: call_gemini(prompt)
  Gemini-->>API: response.text (string)
  API->>API: strip markdown fences + json.loads
  API->>API: validate quiz items + validate_quiz_quality
  API-->>Client: 200 {summary, quiz}
  alt Gemini returns None
    API-->>Client: 500 Failed to generate
  else JSON decode / structure error
    API-->>Client: 500 Failed to parse / Invalid format
  end


@greptile-apps greptile-apps bot left a comment


3 files reviewed, 3 comments


Comment on lines 176 to 180
# Optional: Run quality checks on the quiz
quality_warnings = validate_quiz_quality(data.get("quiz", []))
if quality_warnings:
print(f"[generate] Quality warnings: {quality_warnings}")
# Can log these or return them to the frontend in the future

Sensitive data logged to stdout

On invalid/failed responses, this endpoint prints Raw response: {response} and also prints quality_warnings unconditionally. Gemini output can contain user-provided notes verbatim, so this will leak user content into server logs. Since this PR adds additional logging paths, it should be gated/removed or switched to structured logging with redaction (and avoid printing raw model output).
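
One hedged sketch of the suggested remediation is to log metadata (lengths, counts) instead of content; the function names here are illustrative, not the PR's code.

```python
import logging
from typing import List

logger = logging.getLogger("generate")

def log_parse_failure(raw_response: str) -> None:
    # Log metadata only: the raw text may echo user notes verbatim.
    logger.error("Gemini response failed to parse (length=%d)", len(raw_response))

def log_quality_warnings(warnings: List[str]) -> None:
    if warnings:
        # Count, not content, so no user-derived text reaches the logs.
        logger.warning("Quiz quality issues: %d warning(s)", len(warnings))
```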


Comment on lines 83 to 90
EXAMPLES = """Here are examples of correct output format:

--- EXAMPLE 1 ---

INPUT NOTES:
"Photosynthesis is the process by which plants convert sunlight into energy. It occurs in chloroplasts and requires carbon dioxide and water. The outputs are glucose and oxygen."

CORRECT OUTPUT:

Few-shot violates JSON-only rule

SYSTEM_PROMPT requires “ONLY valid JSON”, but EXAMPLES includes headers like --- EXAMPLE 1 ---, INPUT NOTES:, and CORRECT OUTPUT:. Because the final prompt contains these non-JSON tokens, the model may mirror that structure and prepend similar headers in its response, directly causing json.loads() to fail. If strict JSON output is required, keep the few-shot content itself purely JSON (or clearly isolate examples as non-output content and add an explicit anti-pattern warning).
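
One way to implement this suggestion is to build EXAMPLES from Python data via `json.dumps`, so the prompt never shows the model a non-JSON token it could mirror. A sketch, not the PR's final structure; the example content is abbreviated from the PR's photosynthesis example.

```python
import json

# Example outputs kept as Python data; serialization guarantees valid JSON.
EXAMPLE_OUTPUTS = [
    {
        "summary": ["Photosynthesis converts sunlight into chemical energy"],
        "quiz": [
            {
                "question": "Where does photosynthesis take place in plant cells?",
                "options": ["Mitochondria", "Chloroplasts", "Nucleus", "Cell wall"],
                "answer": "Chloroplasts",
            }
        ],
    }
]

# Each example is itself parseable JSON, with no "--- EXAMPLE 1 ---" headers.
EXAMPLES = "\n\n".join(json.dumps(ex, indent=2) for ex in EXAMPLE_OUTPUTS)
```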

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!


@usha-sj usha-sj force-pushed the sprint2/prompt-design-v1-2.3 branch from 10937ad to 55e4f5e Compare February 22, 2026 23:41
Contributor

Arhum2 commented Feb 23, 2026

@greptileai

Contributor

greptile-apps bot commented Feb 23, 2026

Greptile Summary

This PR centralizes prompt engineering into backend/prompts/study_gen_v1.py and updates main.py to use the new prompt builder. The architectural change is sound, but the PR has critical blocking issues.

Critical issues:

  • backend/README.md contains unresolved merge conflict markers (<<<<<<<, =======, >>>>>>>) that will break documentation rendering
  • Missing __init__.py in backend/prompts/ directory prevents Python from recognizing it as a package, causing import failures
  • Few-shot examples in EXAMPLES include non-JSON formatting (--- EXAMPLE 1 ---, INPUT NOTES:) that contradicts the "ONLY valid JSON" requirement in SYSTEM_PROMPT - this may cause Gemini to mirror the structure and break JSON parsing

Additional notes:

  • Commented-out code in main.py (lines 171-190) should be removed per the PR description
  • The import path from backend.prompts.study_gen_v1 conflicts with the documented run command uvicorn main:app --reload (executed from within backend/)
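
A small sanity check can surface this run-context mismatch early; `prompt_module_importable` is illustrative, not part of the PR. The underlying fix is either importing `prompts.study_gen_v1` (matching `uvicorn main:app` run from backend/, with an empty `backend/prompts/__init__.py`) or running `uvicorn backend.main:app` from the repo root.

```python
import importlib.util

def prompt_module_importable(name: str = "prompts.study_gen_v1") -> bool:
    """True if `name` resolves from the current working dir / sys.path."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # The parent package itself isn't importable in this run context.
        return False
```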

Confidence Score: 0/5

  • This PR cannot be merged - contains syntax errors that will break the build
  • README has unresolved merge conflicts and the prompts directory lacks __init__.py, both causing immediate failures. The prompt design also contradicts its own JSON-only requirement.
  • backend/README.md (merge conflict), backend/main.py (broken import), backend/prompts/study_gen_v1.py (prompt design issue)

Important Files Changed

Filename | Overview
backend/README.md | Unresolved merge conflict prevents the file from being parsed
backend/main.py | Import will fail due to missing __init__.py in the prompts directory
backend/prompts/study_gen_v1.py | Prompt design contradicts the JSON-only requirement with formatted examples

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Client POST /api/v1/generate] --> B[main.py: generate_study_materials]
    B --> C[build_study_generation_prompt]
    C --> D[SYSTEM_PROMPT]
    C --> E[EXAMPLES few-shot]
    C --> F[OUTPUT_FORMAT]
    C --> G[user_notes]
    D --> H[Complete Prompt]
    E --> H
    F --> H
    G --> H
    H --> I[gemini_service.call_gemini]
    I --> J[Gemini API Response]
    J --> K[clean_response]
    K --> L[json.loads]
    L --> M[validate_data]
    M --> N[validate_quiz_quality]
    N --> O[Return GenerateResponse]
    
    style C fill:#90EE90
    style I fill:#FFB6C1
    style L fill:#FFD700
    style M fill:#FFD700

Last reviewed commit: 55e4f5e


@greptile-apps greptile-apps bot left a comment


3 files reviewed, 3 comments


Comment on lines 50 to 174
<<<<<<< HEAD
Run study pack test (require Gemini API key):

```bash
python tests/test_studypack.py
```
=======

---

## Study Generation Prompt System (v1)

The backend now uses a centralized prompt module for AI study generation.

Location:

```
backend/prompts/study_gen_v1.py
```

This module defines how Gemini transforms raw notes into:

- bullet-point summaries
- multiple-choice quiz questions


### How it connects to the API

The `/api/v1/generate` endpoint builds a prompt using:

```python
prompt = build_study_generation_prompt(request.text)
```

Then sends it to Gemini:

```python
response = await gemini_service.call_gemini(prompt)
```

The prompt module handles:

- system instructions
- output schema
- few-shot examples
- formatting rules
- quality validation


### Prompt builder functions

#### Full study pack

```python
build_study_generation_prompt(notes)
```

Returns summary + quiz.

#### Quiz only

```python
build_custom_quiz_prompt(notes, num_questions=3)
```

Generates quiz-only output.

#### Summary only

```python
build_summary_only_prompt(notes)
```

Generates summary-only output.


### Versioning

Current version:

```
study_gen_v1.py → VERSION 1.0.0
```

Future prompt improvements should create:

```
study_gen_v2.py
study_gen_v3.py
```

Never silently change v1 behavior — version prompts explicitly.


### Quality checks

After Gemini responds, quiz output is validated using:

```python
validate_quiz_quality()
```

This detects:

- duplicate options
- invalid answers
- weak question structure

Warnings are logged for debugging.


### Editing prompts safely

If you change prompt behavior:

1. Keep JSON schema identical
2. Do not change API response format
3. Test with messy notes input
4. Verify frontend still parses correctly

Breaking schema can end up breaking frontend.

---

>>>>>>> 39ed2b2 (Updated readme on study prompt generation)

unresolved merge conflict markers still present in file - build will fail

Comment on lines 83 to 145
EXAMPLES = """Here are examples of correct output format:

--- EXAMPLE 1 ---

INPUT NOTES:
"Photosynthesis is the process by which plants convert sunlight into energy. It occurs in chloroplasts and requires carbon dioxide and water. The outputs are glucose and oxygen."

CORRECT OUTPUT:
{
"summary": [
"Photosynthesis converts sunlight into chemical energy (glucose) in plant cells",
"Occurs in chloroplasts and requires CO₂ and H₂O as inputs",
"Produces glucose for plant energy and oxygen as a byproduct"
],
"quiz": [
{
"question": "Where does photosynthesis take place in plant cells?",
"options": ["Mitochondria", "Chloroplasts", "Nucleus", "Cell wall"],
"answer": "Chloroplasts"
},
{
"question": "Which of the following is a product of photosynthesis?",
"options": ["Carbon dioxide", "Water", "Oxygen", "Nitrogen"],
"answer": "Oxygen"
},
{
"question": "What is the primary energy source for photosynthesis?",
"options": ["Heat", "Sunlight", "Chemical energy", "Wind"],
"answer": "Sunlight"
}
]
}

--- EXAMPLE 2 ---

INPUT NOTES:
"The water cycle includes evaporation, condensation, and precipitation. Water evaporates from oceans and lakes, forms clouds through condensation, and returns to Earth as rain or snow."

CORRECT OUTPUT:
{
"summary": [
"The water cycle is a continuous process of water movement on Earth",
"Evaporation occurs when water from oceans and lakes becomes water vapor",
"Condensation forms clouds, and precipitation returns water to Earth's surface"
],
"quiz": [
{
"question": "What happens during evaporation in the water cycle?",
"options": ["Water falls as rain", "Water becomes vapor", "Clouds form", "Ice melts"],
"answer": "Water becomes vapor"
},
{
"question": "Which process forms clouds in the water cycle?",
"options": ["Evaporation", "Precipitation", "Condensation", "Filtration"],
"answer": "Condensation"
},
{
"question": "What are the main forms of precipitation?",
"options": ["Steam and vapor", "Rain and snow", "Clouds and fog", "Rivers and lakes"],
"answer": "Rain and snow"
}
]
}"""

few-shot examples contain non-JSON headers (--- EXAMPLE 1 ---, INPUT NOTES:, CORRECT OUTPUT:), but line 48 requires "ONLY valid JSON" - model may mirror this structure and break parsing

backend/main.py Outdated
from services import GeminiService

# Import the new prompt system
from backend.prompts.study_gen_v1 import (

missing __init__.py in prompts/ directory - Python won't recognize it as a package and import will fail

Collaborator Author

usha-sj commented Feb 24, 2026

@greptileai


@greptile-apps greptile-apps bot left a comment


4 files reviewed, 5 comments


Comment on lines +168 to +169
"Examples of valid JSON output (do not copy any labels; return ONLY JSON):",
EXAMPLES,

Instruction tells model not to copy labels, but the EXAMPLES constant (lines 83-145) includes non-JSON headers like EXAMPLE 1, INPUT NOTES:, and CORRECT OUTPUT:. The model may mirror this structure despite the warning. Consider restructuring examples to be pure JSON or make the warning more explicit


"",
f"Process these notes and generate study materials:",
"",
"Return ONLY a JSON object. Do not include any headings, labels, or extra text.",

Instruction says "Return ONLY a JSON object" but immediately precedes the user notes. Consider placing this instruction AFTER the user notes as the last thing before generation, which may be more effective for output format compliance
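
Sketched out, the suggested ordering puts the JSON-only instruction last, after the notes; `assemble_prompt` is illustrative, not the PR's function.

```python
def assemble_prompt(system_prompt: str, examples: str, user_notes: str) -> str:
    """Place the JSON-only instruction last, where it is hardest to ignore."""
    return "\n\n".join([
        system_prompt,
        examples,
        "Process these notes and generate study materials:",
        user_notes,
        # final instruction before generation:
        "Return ONLY a JSON object. Do not include any headings, labels, or extra text.",
    ])
```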


Contributor

greptile-apps bot commented Feb 24, 2026

Additional Comments (2)

backend/main.py
Inconsistent JSON cleaning - the /api/v1/generate route uses inline cleaning logic (lines 192-207) while this route calls clean_response(). Either both routes should use the helper function or both should use inline logic for maintainability



backend/main.py
Missing quality validation - the /api/v1/generate route calls validate_quiz_quality() on line 214, but this route doesn't. Consider adding the same quality checks for consistency


usha-sj and others added 2 commits February 23, 2026 23:29
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Collaborator Author

usha-sj commented Feb 24, 2026

@greptileai


@greptile-apps greptile-apps bot left a comment


4 files reviewed, 3 comments


Collaborator Author

usha-sj commented Feb 24, 2026

@greptileai

@Arhum2 Arhum2 merged commit 600aeef into main Feb 24, 2026
1 check passed
